

Deep Video Portraits


The New AI Tech Turning Heads in Video Manipulation

#artificialintelligence

A new technique using artificial intelligence to manipulate video content gives new meaning to the expression "talking head." An international team of researchers showcased the latest advancement in synthesizing facial expressions--including mouth, eyes, eyebrows, and even head position--in video at this month's 2018 SIGGRAPH, a conference on innovations in computer graphics, animation, virtual reality, and other forms of digital wizardry. The project is called Deep Video Portraits. It relies on a type of AI called generative adversarial networks (GANs) to modify a "target" actor based on the facial and head movement of a "source" actor. As the name implies, GANs pit two opposing neural networks against one another to create a realistic talking head, right down to the sneer or raised eyebrow.
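The adversarial setup described above can be sketched in miniature. The toy below is a hypothetical 1-D illustration, nothing like the deep image-synthesis networks used in Deep Video Portraits: a linear "generator" learns to map noise onto a target Gaussian distribution while a logistic-regression "discriminator" learns to tell real samples from generated ones, each network's update pushing against the other's.

```python
import numpy as np

# Toy 1-D GAN sketch (illustrative only): a linear generator vs. a
# logistic-regression discriminator, trained with manual gradients.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sample_real(n):
    # "Real" data: a stand-in target distribution, N(3, 0.5).
    return rng.normal(3.0, 0.5, n)

a, c = 1.0, 0.0   # generator params: g(z) = a*z + c
w, b = 0.1, 0.0   # discriminator params: d(x) = sigmoid(w*x + b)
lr = 0.05

for step in range(2000):
    # Discriminator update: push d(real) toward 1, d(fake) toward 0.
    x_real = sample_real(32)
    z = rng.uniform(-1, 1, 32)
    x_fake = a * z + c
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    grad_w = -np.mean((1 - d_real) * x_real - d_fake * x_fake)
    grad_b = -np.mean((1 - d_real) - d_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator update: push d(fake) toward 1 (non-saturating loss).
    z = rng.uniform(-1, 1, 32)
    x_fake = a * z + c
    d_fake = sigmoid(w * x_fake + b)
    grad_a = -np.mean((1 - d_fake) * w * z)
    grad_c = -np.mean((1 - d_fake) * w)
    a -= lr * grad_a
    c -= lr * grad_c

fake_mean = float(np.mean(a * rng.uniform(-1, 1, 1000) + c))
print(f"generated mean after training: {fake_mean:.2f} (real mean 3.0)")
```

The generator starts producing samples centered at 0 and, driven only by the discriminator's feedback, drifts toward the real distribution's mean of 3; the same tug-of-war, scaled up to deep convolutional networks over video frames, is what produces the realistic talking heads described above.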


You thought fake news was bad? Deep fakes are where truth goes to die

The Guardian

In May, a video appeared on the internet of Donald Trump offering advice to the people of Belgium on the issue of climate change. "As you know, I had the balls to withdraw from the Paris climate agreement," he said, looking directly into the camera, "and so should you." The video was created by a Belgian political party, Socialistische Partij Anders, or sp.a, and posted on sp.a's Twitter and Facebook. It provoked hundreds of comments, many expressing outrage that the American president would dare weigh in on Belgium's climate policy. One woman wrote: "Humpy Trump needs to look at his own country with his deranged child killers who just end up with the heaviest weapons in schools."


This AI system could make lip sync dubbing accurate

#artificialintelligence

Toronto, Aug 19 (IANS) Dodgy lip sync dubbing could soon become a thing of the past, as researchers have developed an Artificial Intelligence (AI)-based system that can edit the facial expressions of actors to accurately match dubbed voices. The system, called Deep Video Portraits, can also be used to correct gaze and head pose in video conferencing, and it opens new possibilities for video post-production and visual effects, according to research presented at the SIGGRAPH 2018 conference in Vancouver, Canada. "This technique could also be used for post-production in the film industry, where computer graphics editing of faces is already widely used in today's feature films," said study co-author Christian Richardt from the University of Bath in Britain. The researchers believe the new system could help the film industry save time and reduce post-production costs. Unlike previous methods, which focus only on the interior of the face, Deep Video Portraits can animate the whole face, including the eyes, eyebrows, and head position in videos, using controls known from computer graphics face animation.


AI could make dodgy lip sync dubbing a thing of the past

#artificialintelligence

The technique was developed by an international team led by a group from the Max Planck Institute for Informatics and including researchers from the University of Bath, Technicolor, TU Munich and Stanford University. The work, called Deep Video Portraits, was presented for the first time at the SIGGRAPH 2018 conference in Vancouver on 16th August. Unlike previous methods, which focus only on the interior of the face, Deep Video Portraits can animate the whole face, including the eyes, eyebrows, and head position in videos, using controls known from computer graphics face animation. It can even synthesise a plausible static video background if the head is moved around. Hyeongwoo Kim from the Max Planck Institute for Informatics explains: "It works by using model-based 3D face performance capture to record the detailed movements of the eyebrows, mouth, nose, and head position of the dubbing actor in a video." The research is currently at the proof-of-concept stage and does not yet run in real time, but the researchers anticipate the approach could make a real difference to the visual entertainment industry. Professor Christian Theobalt, from the Max Planck Institute for Informatics, said: "Despite extensive post-production manipulation, dubbing films into foreign languages always presents a mismatch between the actor on screen and the dubbed voice."


Most Deepfake Videos Have One Glaring Flaw

#artificialintelligence

The rate at which deepfake videos are advancing is both impressive and deeply unsettling. But researchers have described a new method for detecting a "telltale sign" of these manipulated videos, which map one person's face onto the body of another. It's a flaw even the average person would notice: a lack of blinking. Researchers from the University at Albany, SUNY's computer science department recently published a paper titled "In Ictu Oculi: Exposing AI Generated Fake Face Videos by Detecting Eye Blinking." The paper details how they combined two neural networks to more effectively expose synthesized face videos, which often overlook "spontaneous and involuntary physiological activities such as breathing, pulse and eye movement." The researchers note that the mean resting blink rate for humans is 17 blinks per minute, which increases to 26 blinks per minute when someone is talking, and decreases to 4.5 blinks per minute when someone is reading.
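The blink-rate figures above suggest a simple screening heuristic. The sketch below is a hypothetical illustration, not the Albany paper's method (which combines two neural networks): it counts dips in a per-frame eye-aspect-ratio (EAR) signal, a quantity that would in practice come from a facial-landmark detector, and flags clips whose blink rate falls far below the ~17 blinks per minute resting mean. The threshold values are illustrative assumptions.

```python
import numpy as np

def count_blinks(ear, threshold=0.2):
    """Count blinks as closed->open transitions: runs of frames where
    the eye-aspect-ratio signal dips below the (assumed) threshold."""
    closed = ear < threshold
    # Count rising edges of the "closed" mask, plus a blink in progress
    # at frame 0 if the clip starts with the eyes closed.
    return int(np.sum(closed[1:] & ~closed[:-1]) + closed[0])

def blink_rate_suspicious(ear, fps, expected_rate=17.0, min_fraction=0.25):
    """Flag a clip whose blink rate is far below the ~17 blinks/min
    human resting mean cited above. Returns (flag, blinks_per_minute)."""
    minutes = len(ear) / fps / 60.0
    rate = count_blinks(ear) / minutes
    return rate < expected_rate * min_fraction, rate

# Synthetic example: one minute of video at 30 fps, eyes open (EAR ~0.3)
# except for three short dips to 0.1 -- only 3 blinks/min, far too few.
ear = np.full(1800, 0.3)
for start in (100, 500, 900):
    ear[start:start + 4] = 0.1

suspicious, rate = blink_rate_suspicious(ear, fps=30)
print(f"{rate:.1f} blinks/min, suspicious={suspicious}")
```

A real detector would of course need to handle landmark noise, cuts, and off-axis faces; the point here is only that the physiological statistic quoted in the text translates directly into a cheap first-pass filter.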


Forget DeepFakes, Deep Video Portraits are way better (and worse)

#artificialintelligence

The strange, creepy world of "deepfakes," videos (often explicit) with the faces of the subjects replaced by those of celebrities, set off alarm bells just about everywhere early this year. And in case you thought that sort of thing had gone away because people found it unethical or unconvincing, the practice is back with the highly convincing "Deep Video Portraits," which refines and improves the technique. To be clear, I don't want to conflate this interesting research with the loathsome practice of putting celebrity faces on adult film star bodies. But this application of technology is clearly here to stay and it's only going to get better -- so we had best keep pace with it so we don't get taken by surprise. Deep Video Portraits is the title of a paper submitted for consideration this August at SIGGRAPH; it describes an improved technique for reproducing the motions, facial expressions, and speech movements of one person using the face of another.